
Exploring Exploration: Comparing Children with RL Agents in Unified Environments


Despite recent advances in artificial intelligence (AI) research, human children are still by far the best learners we know of, learning impressive skills like language and high-level reasoning from very little data. Children’s learning is supported by highly efficient, hypothesis-driven exploration: in fact, they explore so well that many machine learning researchers have been inspired to put videos like the one below in their talks to motivate research into exploration methods. However, because applying results from studies in developmental psychology can be difficult, this video is often the extent to which such research actually connects with human cognition.

A time-lapse of a baby playing with toys.

Why is directly applying research from developmental psychology to problems in AI so hard? For one, the environments in which human children and artificial agents are typically studied can be very different. Traditionally, reinforcement learning (RL) research takes place in grid-world-like settings or other 2D games, whereas children act in the real world, which is rich and 3-dimensional. Furthermore, comparisons between children and AI agents are difficult to make because the experiments are not controlled and often have an objective mismatch: much of the developmental psychology research with children involves free exploration, whereas a majority of research in AI is goal-driven. Finally, it can be hard to ‘close the loop’: to not only build agents inspired by children, but also learn about human cognition from outcomes in AI research. By studying children and artificial agents in the same, controlled, 3D environment, we can potentially alleviate many of these problems and ultimately advance research in both AI and cognitive science.

That’s exactly what this work aims to do, in collaboration with Jessica Hamrick and Sandy Huang from DeepMind, and Deepak Pathak, Pulkit Agrawal, John Canny, Alexei A. Efros, Jeffrey Liu, and Alison Gopnik from UC Berkeley. We have developed a platform and framework for directly contrasting agent and child exploration, based on DeepMind Lab – a first-person 3D navigation and puzzle-solving environment originally built for testing agents in mazes with rich visuals.

What do we actually know about how children explore?

The main thing we know about child exploration is that children form hypotheses about how the world works, and they engage in exploration to test those hypotheses. For example, studies such as the one from Liz Bonawitz et al. (2007) showed that preschoolers’ exploratory play is affected by the evidence they observe. If it seems like there are multiple ways a toy could work but it’s not clear which one is right (in other words, the evidence is causally confounded), then children engage in hypothesis-driven exploration and will explore the toy for significantly longer than when the dynamics and outcome are simple (in which case they quickly move on to a new toy).

Stahl and Feigenson (2015) showed that when babies as young as 11 months are presented with objects that violate physical laws in their environments, they explore those objects more and even engage in hypothesis-testing behaviors that reflect the particular kind of violation seen. For example, if they see a car floating in the air (as in the video on the left), they find this surprising; they then bang the toy on the table to explore how it works. In other words, these violations guide the children’s exploration in a meaningful way.

How do AI agents explore?

Classic work in computer science and AI focused on developing search methods that try to seek out a goal. For example, a depth-first search strategy will continue exploring down a particular path until either the goal or a dead-end is reached. If a dead-end is reached, it will backtrack until the next unexplored path is found and then proceed down that path. However, unlike children’s exploration, methods like these don’t have a notion of exploring more given surprising evidence, gathering information, or testing hypotheses. More recent work in RL has seen the development of other types of exploration algorithms. For example, intrinsic motivation methods provide a bonus for exploring interesting regions, such as those that have not been visited as much previously or those which are surprising. While these seem in principle more similar to children’s exploration, they are typically used more to expose agents to a diverse set of experience during training, rather than to support rapid learning and exploration at decision time.
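To make the contrast concrete, here is a minimal sketch of depth-first exploration of a grid maze (my own illustration, not an implementation from the paper). The explorer keeps pushing forward into unvisited cells and backtracks from dead ends, with no notion of surprise or information gain.

```python
# Toy depth-first exploration of a grid maze (illustrative sketch only).
# '#' marks a wall, '.' marks open space. The explorer steps into the first
# unvisited open neighbor it finds; at a dead end it backtracks by popping
# the stack, exactly as described in the text.
def dfs_explore(maze, start):
    rows, cols = len(maze), len(maze[0])
    visited, order = set(), []
    stack = [start]
    while stack:
        r, c = stack[-1]
        if (r, c) not in visited:
            visited.add((r, c))
            order.append((r, c))
        # Try to move forward into an unvisited open neighbor.
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == '.' and (nr, nc) not in visited):
                stack.append((nr, nc))
                break
        else:
            stack.pop()  # Dead end: backtrack to the previous cell.
    return order

maze = ["..#",
        ".##",
        "..."]
print(dfs_explore(maze, (0, 0)))
# → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (0, 1)]
```

Note that every step is dictated purely by which cells remain unvisited: nothing in the loop would make the explorer linger on a surprising observation the way a child would.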

Experimental setup

To alleviate some of the difficulties mentioned above regarding applying results from developmental psychology to AI research, we developed a setup that enables a direct comparison between child and agent behavior in the same environment, with the same observations and maps. We do this using DeepMind Lab, an existing platform for training and evaluating RL agents. Moreover, we can restrict the action space in DeepMind Lab to four simple actions (forward, backward, turn left, and turn right) using custom controllers, which makes it easier for children to navigate the environment. Finally, in DeepMind Lab we can procedurally generate a huge amount of training data that we can use to bring agents up to speed on common concepts like “wall” and “goal”.

In the picture below you can see an overview of the parts of our experimental setup. On the left, we see what it looks like when a child is exploring the maze using the controller and the 4 possible actions they can take. In the middle, we see what the child is seeing while navigating through the maze, and on the right there is an aerial view of the child’s overall trajectory in the maze.

Experimental results

We first investigated whether children (ages 4–5) who are naturally more curious and exploratory in a maze are better able to find a goal later introduced at a random location. To test this, we have children explore the maze for 3 minutes without any specific instructions or goals, then introduce a goal (represented as a gummy) in the same maze and ask the children to find it. We measure both 1) the percentage of the maze explored during the free-exploration part of the task and 2) how long, in terms of number of steps, it takes the children to find the gummy after it is introduced.

We break down the percentage of maze explored into three groups: low, medium, and high explorers. The low explorers explored around 22% of the maze on average, the medium explorers around 44%, and the high explorers around 71%. What we find is that the less exploring a child did in the first part of the task, the more steps it took them to reach the gummy. This result can be visualized in the bar chart on the left, where the Y-axis represents the number of steps it took to find the gummy and the X-axis represents the explorer type. The data suggest that higher explorers are more efficient at finding a goal in this maze setting.

Now that we have data from the children in the mazes, how do we measure the difference between an agent and a human trajectory? One method is to measure whether the child’s actions are “consistent” with those of a specific exploration technique. Given a specific state or observation, a human or agent has a set of actions that can be taken from that state. If the child takes the same action that an agent would take, we call the action choice ‘consistent’; otherwise, it is inconsistent. We measure the percentage of states in a child’s trajectory where the action taken by the child is consistent with an algorithmic approach. We call this percentage “choice-consistency.”
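The metric itself is simple to state in code. Below is a minimal sketch (the function name and data format are my own assumptions for illustration, not the paper's code): given a child's sequence of (state, action) pairs and a reference policy mapping states to actions, count the fraction of matching choices.

```python
# Sketch of the "choice-consistency" measure (illustrative names and data
# format; not the authors' implementation). trajectory is a list of
# (state, action) pairs the child produced; algorithm_policy maps a state
# to the action the reference algorithm would take there.
def choice_consistency(trajectory, algorithm_policy):
    matches = sum(1 for state, action in trajectory
                  if algorithm_policy(state) == action)
    return 100.0 * matches / len(trajectory)

# Hypothetical example: states are maze cells, actions are controller moves.
child = [((0, 0), "forward"), ((1, 0), "forward"),
         ((2, 0), "turn_left"), ((2, 1), "forward")]
dfs_policy = {(0, 0): "forward", (1, 0): "forward",
              (2, 0): "turn_right", (2, 1): "forward"}.get
print(choice_consistency(child, dfs_policy))  # 3 of 4 actions match → 75.0
```

The same function works for any reference policy, so swapping DFS for another search algorithm or a trained RL agent only changes the `algorithm_policy` argument.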

One of the simplest algorithmic exploration techniques is a systematic search method called depth-first search (DFS). Recall that in our task, children first engaged in free exploration of the maze and then in goal-directed search. When we compare the consistency of the children’s trajectories in those two settings with the DFS algorithm, we find that kids in the free-exploration setting take actions consistent with DFS only 90% of the time, whereas in the goal-oriented setting they match DFS 96% of the time.

One way to interpret this result is that kids in the goal-oriented setting are taking actions that are more systematic than in free exploration. In other words, the children look more like a search algorithm when they are given a goal.

Conclusion and future work

This work only begins to touch on a number of deep questions regarding how children and agents explore: how much children and agents are willing to explore, whether free and goal-directed exploration strategies differ, and how reward shaping affects exploration. Yet our setup allows us to ask many more questions, and we have concrete plans to do so.

While DFS is a great first baseline for uninformed search, a natural next step toward a better comparison is to compare the children’s trajectories to those of other classical search algorithms and of RL agents from the literature. Further, even the most sophisticated methods for exploration in RL tend to explore only in the service of a specific goal, and are usually driven by error rather than by seeking information. Properly aligning the objectives of RL algorithms with those of an exploratory child is an open question.

We believe that truly intelligent agents must do as children do: actively explore their environments, perform experiments, and gather information to weave together into a rich model of the world. By pursuing this direction, we hope to gain a deeper understanding of the way children and agents explore novel environments, and of how to close the gap between them.

This blog post is based on the following paper:

  • Exploring Exploration: Comparing Children with RL Agents in Unified Environments.
    Eliza Kosoy, Jasmine Collins, David M. Chan, Sandy Huang, Deepak Pathak, Pulkit Agrawal, John Canny, Alison Gopnik, and Jessica B. Hamrick
    arXiv preprint arXiv:2005.02880 (2020)

How Earth’s Climate Changes Naturally (and Why Things Are Different Now)


Earth has been a snowball and a hothouse at different times in its past. So if the climate changed before humans, how can we be sure we’re responsible for the dramatic warming that’s happening today?

In part it’s because we can clearly show the causal link between carbon dioxide emissions from human activity and the 1.28 degree Celsius (and rising) global temperature increase since preindustrial times. Carbon dioxide molecules absorb infrared radiation, so with more of them in the atmosphere, they trap more of the heat radiating off the planet’s surface below.

But paleoclimatologists have also made great strides in understanding the processes that drove climate change in Earth’s past. Here’s a primer on 10 ways climate varies naturally, and how each compares with what’s happening now.

Solar Cycles

Magnitude: 0.1 to 0.3 degrees Celsius of cooling

Time frame: 30- to 160-year downturns in solar activity separated by centuries

Every 11 years, the sun’s magnetic field flips, driving an 11-year cycle of solar brightening and dimming. But the variation is small and has a negligible impact on Earth’s climate.

More significant are “grand solar minima,” decades-long periods of reduced solar activity that have occurred 25 times in the last 11,000 years. A recent example, the Maunder minimum, which occurred between 1645 and 1715, saw solar energy drop by 0.04% to 0.08% below the modern average. Scientists long thought the Maunder minimum might have caused the “Little Ice Age,” a cool period from the 15th to the 19th century; they’ve since shown it was too small and occurred at the wrong time to explain the cooling, which probably had more to do with volcanic activity.

The sun has been dimming slightly for the last half-century while the Earth heats up, so global warming cannot be blamed on the sun.

Volcanic Sulfur

Magnitude: Approximately 0.6 to 2 degrees Celsius of cooling

Time frame: 1 to 20 years

In the year 539 or 540 A.D., the Ilopango volcano in El Salvador exploded so violently that its eruption plume reached high into the stratosphere. Cold summers, drought, famine and plague devastated societies around the world.

Eruptions like Ilopango’s inject the stratosphere with reflective droplets of sulfuric acid that screen sunlight, cooling the climate. Sea ice can increase as a result, reflecting more sunlight back to space and thereby amplifying and prolonging the global cooling.

Ilopango triggered a roughly 2 degree Celsius drop that lasted 20 years. More recently, the eruption of Pinatubo in the Philippines in 1991 cooled the global climate by 0.6 degrees Celsius for 15 months.

Volcanic sulfur in the stratosphere can be disruptive, but in the grand scale of Earth’s history it’s tiny and temporary.

Short-Term Climate Fluctuations

Magnitude: Up to 0.15 degrees Celsius

Time frame: 2 to 7 years

On top of seasonal weather patterns, there are other short-term cycles that affect rainfall and temperature. The most significant, the El Niño–Southern Oscillation, involves circulation changes in the tropical Pacific Ocean on a time frame of two to seven years that strongly influence rainfall in North America. The North Atlantic Oscillation and the Indian Ocean Dipole also produce strong regional effects. Both of these interact with the El Niño–Southern Oscillation.

The interconnections between these cycles used to make it hard to show that human-caused climate change was statistically significant and not just another lurch of natural variability. But anthropogenic climate change has since gone well beyond natural variability in weather and seasonal temperatures. The U.S. National Climate Assessment in 2017 concluded that there’s “no convincing evidence for natural cycles in the observational record that could explain the observed changes in climate.”

Orbital Wobbles

Magnitude: Approximately 6 degrees Celsius in the last 100,000-year cycle; varies through geological time

Time frame: Regular, overlapping cycles of 23,000, 41,000, 100,000, 405,000 and 2,400,000 years

Earth’s orbit wobbles as the sun, the moon and other planets change their relative positions. These cyclical wobbles, called Milankovitch cycles, cause the amount of sunlight to vary at middle latitudes by up to 25% and cause the climate to oscillate. These cycles have operated throughout time, yielding the alternating layers of sediment you see in cliffs and road cuts.

During the Pleistocene epoch, which ended about 11,700 years ago, Milankovitch cycles sent the planet in and out of ice ages. When Earth’s orbit made northern summers warmer than average, vast ice sheets across North America, Europe and Asia melted; when the orbit cooled northern summers, those ice sheets grew again. Since warmer oceans dissolve less carbon dioxide, atmospheric carbon dioxide levels rose and fell in concert with these orbital wobbles, amplifying their effects.

Today Earth is approaching another minimum of northern sunlight, so without human carbon dioxide emissions we would be heading into another ice age within the next 1,500 years or so.

Faint Young Sun

Magnitude: No net temperature effect

Time frame: Constant

Though the sun’s brightness fluctuates on shorter timescales, it brightens overall by 0.009% per million years, and it has brightened by 48% since the birth of the solar system 4.5 billion years ago.

Scientists reason that the faintness of the young sun should have meant that Earth remained frozen solid for the first half of its existence. But, paradoxically, geologists have found 3.4-billion-year-old rocks that formed in wave-agitated water. Earth’s unexpectedly warm early climate is probably explained by some combination of less land erosion, clearer skies, a shorter day and a peculiar atmospheric composition before Earth had an oxygen-rich atmosphere.

Clement conditions in the second half of Earth’s existence, despite a brightening sun, do not create a paradox: Earth’s weathering thermostat counteracts the effects of the extra sunlight, stabilizing Earth’s temperature (see next section).

Carbon Dioxide and the Weathering Thermostat

Magnitude: Counteracts other changes

Time frame: 100,000 years or longer

The main control knob for Earth’s climate through deep time has been the level of carbon dioxide in the atmosphere, since carbon dioxide is a long-lasting greenhouse gas that blocks heat that tries to rise off the planet.

Volcanoes, metamorphic rocks and the oxidization of carbon in eroded sediments all emit carbon dioxide into the sky, while chemical reactions with silicate minerals remove carbon dioxide and bury it as limestone. The balance between these processes works as a thermostat, because when the climate warms, chemical reactions become more efficient at removing carbon dioxide, putting a brake on the warming. When the climate cools, reactions become less efficient, easing the cooling. Consequently, over the very long term, Earth’s climate has remained relatively stable, providing a habitable environment. In particular, average carbon dioxide levels have declined steadily in response to solar brightening.
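The negative feedback described here can be captured in a toy iteration (my own deliberately crude simplification, not a published climate model): emissions push temperature up, warming speeds up CO2-removing weathering, and the system settles at a stable equilibrium instead of running away.

```python
# Toy model of the weathering thermostat (an illustrative simplification,
# not a real climate model). Warming accelerates CO2-removing weathering,
# which acts as a brake on further warming.
def weathering_step(temp, emission, baseline=10.0, k=0.5, dt=0.1):
    removal = k * (temp - baseline)      # weathering speeds up when warm
    return temp + dt * (emission - removal)

temp = 10.0
for _ in range(500):
    temp = weathering_step(temp, emission=2.0)
# temp converges to baseline + emission / k = 14.0: a new, stable equilibrium
```

The catch, as the next paragraph explains, is the timescale: in this toy the brake acts every step, but the real thermostat takes hundreds of thousands of years to respond, far too slowly to absorb fossil-fuel emissions.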

However, the weathering thermostat takes hundreds of thousands of years to react to changes in atmospheric carbon dioxide. Earth’s oceans can act somewhat faster to absorb and remove excess carbon, but even that takes millennia and can be overwhelmed, leading to ocean acidification. Each year, the burning of fossil fuels emits about 100 times more carbon dioxide than volcanoes emit — too much too fast for oceans and weathering to neutralize it, which is why our climate is warming and our oceans are acidifying.

Plate Tectonics

Magnitude: Roughly 30 degrees Celsius over the past 500 million years

Time frame: Millions of years

The rearrangement of land masses on Earth’s crust can slowly shift the weathering thermostat to a new setting.

The planet has generally been cooling for the last 50 million years or so, as plate tectonic collisions thrust up chemically reactive rock like basalt and volcanic ash in the warm, wet tropics, increasing the rate of reactions that draw carbon dioxide from the sky. Additionally, over the last 20 million years, the building of the Himalayas, Andes, Alps and other mountains has more than doubled erosion rates, boosting weathering. Another contributor to the cooling trend was the drifting apart of South America and Tasmania from Antarctica 35.7 million years ago, which initiated a new ocean current around Antarctica. This invigorated ocean circulation and carbon dioxide–consuming plankton; Antarctica’s ice sheets subsequently grew substantially.

Earlier, in the Jurassic and Cretaceous periods, dinosaurs roamed Antarctica because enhanced volcanic activity, in the absence of those mountain chains, sustained carbon dioxide levels around 1,000 parts per million, compared to 415 ppm today. The average temperature of this ice-free world was 5 to 9 degrees Celsius warmer than now, and sea levels were around 250 feet higher.

Asteroid Impacts

Magnitude: Approximately 20 degrees Celsius of cooling followed by 5 degrees Celsius of warming (Chicxulub)

Time frame: Centuries of cooling, 100,000 years of warming (Chicxulub)

The Earth Impact Database recognizes 190 confirmed impact craters on Earth so far. None had any discernible effect on Earth’s climate except for the Chicxulub impact, which vaporized part of Mexico 66 million years ago, killing off the dinosaurs. Computer modeling suggests that Chicxulub blasted enough dust and sulfur into the upper atmosphere to dim sunlight and cool Earth by more than 20 degrees Celsius, while also acidifying the oceans. The planet took centuries to return to its pre-impact temperature, only to warm by a further 5 degrees Celsius due to carbon dioxide released into the atmosphere from vaporized Mexican limestone.

How or whether volcanic activity in India around the same time as the impact exacerbated the climate change and mass extinction remains controversial.

Evolutionary Changes

Magnitude: Depends on event; about 5 degrees Celsius cooling in late Ordovician (445 million years ago)

Time frame: Millions of years

Occasionally, the evolution of new kinds of life has reset Earth’s thermostat. Photosynthetic cyanobacteria that arose some 3 billion years ago, for instance, began terraforming the planet by emitting oxygen. As they proliferated, oxygen eventually rose in the atmosphere 2.4 billion years ago, while methane and carbon dioxide levels plummeted. This plunged Earth into a series of “snowball” climates for 200 million years. The evolution of ocean life larger than microbes initiated another series of snowball climates 717 million years ago — in this case, it was because the organisms began raining detritus into the deep ocean, exporting carbon from the atmosphere into the abyss and ultimately burying it.

When the earliest land plants evolved about 230 million years later in the Ordovician period, they began forming the terrestrial biosphere, burying carbon on continents and extracting land nutrients that washed into the oceans, boosting life there, too. These changes probably triggered the ice age that began about 445 million years ago. Later, in the Devonian period, the evolution of trees further reduced carbon dioxide and temperatures, conspiring with mountain building to usher in the Paleozoic ice age.

Large Igneous Provinces

Magnitude: Around 3 to 9 degrees Celsius of warming

Time frame: Hundreds of thousands of years

Continent-scale floods of lava and underground magma called large igneous provinces have ushered in many of Earth’s mass extinctions. These igneous events unleashed an arsenal of killers (including acid rain, acid fog, mercury poisoning and destruction of the ozone layer), while also warming the planet by dumping huge quantities of methane and carbon dioxide into the atmosphere more quickly than the weathering thermostat could handle.

In the end-Permian event 252 million years ago, which wiped out 81% of marine species, underground magma ignited Siberian coal, drove up atmospheric carbon dioxide to 8,000 parts per million and raised the temperature by between 5 and 9 degrees Celsius. The more minor Paleocene-Eocene Thermal Maximum event 56 million years ago cooked methane in North Atlantic oil deposits and funneled it into the sky, warming the planet by 5 degrees Celsius and acidifying the ocean; alligators and palms subsequently thrived on Arctic shores. Similar releases of fossil carbon deposits happened in the end-Triassic and the early Jurassic; global warming, ocean dead zones and ocean acidification resulted.

If any of that sounds familiar, it’s because human activity is causing the same effects today.

As a team of researchers studying the end-Triassic event wrote in April in Nature Communications, “Our estimates suggest that the amount of CO2 that each … magmatic pulse injected into the end-Triassic atmosphere is comparable to the amount of anthropogenic emissions projected for the 21st century.”

Update: July 24, 2020
An earlier version of this article included a chart of carbon dioxide and oxygen concentrations through deep time. That chart was based on a single data source and does not reflect the best available modern evidence. It has been removed from the article.


The Mechanics of Moral Change


I’ve recently become fascinated by moral revolutions. As I have explained before, by “moral revolution” I mean a change in social beliefs and practices about rights, wrongs, goods and bads. I don’t mean a change in the overarching moral truth (if such a thing exists). Moral revolutions strike me as an important topic of study because history tells us that our moral beliefs and practices change, at least to some extent, and it is possible that they will do so again in the future. Can we plan for and anticipate future moral revolutions? That's what I am really interested in.

To get a handle on this question, we need to think about the dynamics of moral change. What is changing and how does it change? Recently, I’ve been reading up on the history and psychology of morality and this article is an attempt to distill, from that reading, some models for understanding the dynamics of moral change. Everything I say here is preliminary and tentative but it might be of interest to some readers.

1. The Mechanics of Morality: a Basic Picture

Let’s start at the most abstract level. What is morality? Philosophers will typically tell you that morality consists of two things: (i) a set of claims about what is and is not valuable (i.e. what is good/bad/neutral) and (ii) a set of claims about what is and is not permissible (i.e. what is right, wrong, forbidden, allowed etc).

Values are things we ought to promote and honour through our behaviour. They include things like pleasure, happiness, love, equality, freedom, well-being and so on. The list of things that are deemed valuable can vary from society to society and across different historical eras. For example, Ancient Greek societies, particularly in the Homeric era, placed significant emphasis on the value of battlefield bravery. Modern liberal societies tend to value the individual pursuit of happiness more than bravery on the battlefield. That said, don’t misinterpret this example. There are many shared values across time and space. Oftentimes the changes between societies are subtle, involving different priority rankings over shared values rather than truly different sets of values.

Rights and wrongs are the specific behavioural rules that we ought to follow. They are usually connected to values. Indeed, in some sense, values are the more fundamental moral variable. A society needs to figure out what it values first before it comes up with specific behavioural rules (though it may be possible that following specific rules causes you to change your values). These behavioural rules can also vary from society to society and across different historical eras. To give a controversial example, it seems that sexual relationships between older men and (teenage) boys were permissible, and even celebrated, in Ancient Greece. In modern liberal societies they are deemed impermissible.

So beliefs about what is good/bad and right/wrong are the fundamental moral variables. It follows that moral revolutions must consist, at a minimum, in changes in what people think is good/bad (additions, subtractions and reprioritisations of values) and right/wrong (new permissions, obligations, prohibitions and so on).

2. Our Moral Machinery

How could these things change? To start to answer this question, I suggest we develop a simple model of the human moral machine. By using the term “human moral machine” I mean to refer to the machine that generates our current moral beliefs and practices. How does that machine currently work? It’s only when we can answer this question that we will get a better sense of how things might change in the future. To be clear, I don’t think of this as a machine in the colloquial sense. It’s not like an iPhone or a laptop computer. It is, rather, a complex social-technical-biological mechanism, made up of objects, processes and functions. I hope no one will mind this terminological preference.

At its most fundamental level, the human moral machine is the human brain. The brain, after all, is the thing that generates our moral beliefs and practices. How does this happen? All brains are, in a sense, evaluative systems. They record sensory inputs and then determine the evaluative content of those inputs. Think about the brain of a creature like a slug. It probes the creature’s local environment identifying potential food sources (good), mates (good), toxic substances (bad) and predators (bad). The slug itself may not understand any of this — and it may not share the conceptual labels that we apply to its sensory inputs — but its brain is, nevertheless, constantly evaluating its surroundings. It then uses these evaluations to generate actions and behaviours. It often does this in a predictable way. In short, the brain of the slug generates rules for behaviour in response to evaluative content.

Human brains are no different. They are also constantly evaluating their surroundings, categorising sensory inputs according to their evaluative content, and generating rules for action in response. Where humans differ from slugs is in the complexity of our evaluations and the diversity of the behavioural rules we follow. Some of our evaluations and rules are programmed into us as basic evolutionary responses; some we learn from our cultures and peers; some we learn through our own life experiences. It is through this process of evaluation and rule generation that we create moral beliefs and practices. This isn’t to say that moral beliefs and practices are simply reducible to brain-generated evaluations and rules. For one thing, not all such evaluations and rules attract the label “moral”. Moral values and rules are rather a subset of these things that take on a particular importance in human social life. They are evaluations and rules that are shared across a society and used as standards against which to criticise and punish conduct.

To say that the basic moral machine is the human brain is not to say that much. What we really want to know is whether the human brain tends to engage in certain kinds of predictable moral evaluation and rule generation. If it does, then there is some hope for developing a general model of moral change. If it doesn’t -- if evaluation and rule-generation is entirely random or too complex to reverse engineer -- then the prospects are pretty dim.

Should we be optimistic or pessimistic on this front? Although there are people who think there is a good deal of randomness and complexity to how our brains learn and adapt to the world, there are plenty of others who disagree and think there are predictable patterns to be discerned. This seems to be true even in the moral realm. Although the diversity of human moral systems is impressive, there is also some remarkable affinity across different cultures. Humans tend to share some very similar values across cultures and this can lead to very similar cross-cultural moral rules.

So I shall be optimistic for the time being and suggest that there are some simple, predictable forces at work in the human moral machine. In particular, I am going to suggest that evolutionary forces have given humans a basic moral conscience — i.e. a basic capacity for generating and adhering to moral norms — and that this moral conscience was an adaptive response to particular challenges faced by human societies in the past. In addition to this, I am going to suggest that this basic moral conscience is, in turn, honed and cultivated in each of our own, individual lives, in response to the cultures we grow up in and the particular experiences we have. The combination of these two things — evolved moral conscience plus individual moral development — is what gives us our current set of moral beliefs and practices and places constraints on our openness to moral change.

In the future, changes to our technologies, cultures and environments are likely to agitate this moral machinery and force it to generate new moral evaluations and rules. This model for understanding human moral change is illustrated below.

For the remainder of this post I will not say much about the future of morality. Instead, I will focus on how our moral consciences might have evolved and how they develop over the course of our own lives.

3. Our Evolved Conscience

I suspect there is no fully satisfactory definition of the term “moral conscience” but the one I prefer defines the conscience as an internalised rule or set of rules that humans believe they ought to follow. In other words, it is our internal sense of right and wrong.

In his book Moral Origins — which I will be referring to several times in what follows — Christopher Boehm argues that our conscience is an “internalised imperative” telling us that we ought to follow a particular rule or else. His claim is that this internalised imperative originally took the form of a conditional rule based on a desire to avoid social punishment:

Original Conscience: I ought to do X [because X is a socially cooperative behaviour and if I fail to do X I will be punished]

What happened over time was that the bit in the square brackets got dropped from how we mentally represent the imperative.

Modern Conscience: I ought to do X because it is the right thing to do.

This modern formulation gives moral rules a special mental flavour. To use the Kantian terminology, moral rules seem to take the form of categorical imperatives — rules that we have to follow — not simply rules that we should follow in order to achieve desirable results. Nevertheless, according to Boehm, the bit in the square brackets of the original formulation is crucial to understanding the evolutionary origins of moral conscience.

Most studies of the evolutionary origins of morality take the human instinct for prosociality and altruism as their starting point. They note that humans are much more altruistic than their closest relatives and try to figure out why. This makes sense. Although there is more to morality than altruism, it is fair to say that valuing the lives and well-being of other humans, and following altruistic norms, is one of the hallmarks of human morality. Boehm’s analysis of the origins of human moral conscience tries to capture this. The bit in the square brackets links moral conscience to our desire to fit in with our societies and cooperate with others.

So what gave rise to this cooperative, altruistic tendency? The full answer is presumably very complex, but a simple answer focuses on two things in particular.

The first is that humans, due to their large brains, faced an evolutionary pressure to form close social bonds. How so? In her book Conscience, the philosopher Patricia Churchland explains it in the following way. She argues that it emerged from an evolutionary tradeoff between endothermy (internal generation of heat), flexible learning and infant dependency. Roughly:

  • Humans evolved to fill the cognitive niche, i.e. our evolutionary success was determined by our ability to use our brains, individually and collectively, to solve complex survival problems in changing environments. This meant that we evolved brains that do not follow lots of pre-programmed behavioural rules (like, for example, turtles) but, rather, brains that learn new behavioural rules in response to experiences.
  • In order to have this capacity for flexible learning, we needed to have big, relatively contentless brains. This meant that we had to be born relatively helpless. We couldn’t have all the know-how we needed to survive programmed into us from birth. We had to use experience to figure things out (obviously this isn’t the full picture, but it seems more true of humans than of other animals).
  • In addition to being relatively helpless at birth, our big brains were also costly in terms of energy expenditure. We needed a lot of fuel to keep them growing and developing.
  • All of this made humans very dependent on others from birth. In the first instance, this dependency manifested itself in mother-infant relationships, but then social and cultural forces selected for greater community care and investment in infants. Families and tribes all helped out to produce the food, shelter and clothing (and education and technology) needed to ensure the success of our offspring.
  • The net result was a positive evolutionary feedback loop. We were born highly dependent on others, which encouraged us to form close social bonds, and which encouraged others to invest a lot in our success and well-being. A complex set of moral norms concerning cooperation and group sharing emerged as a result.

This was the evolutionary seed for a moral conscience centering on altruism and prosociality.

I like Churchland’s theory because it highlights evolutionary pressures that are often neglected in the story of human morality. In particular, I like how she places biochemical constraints arising from the energy expenditure of the brain at the centre of her story about the origins of our moral conscience. This makes her story somewhat similar to that of Ian Morris, who makes different technologies of energy capture central to his story about the changes in human morality over the past 40,000 years. 

That said, Churchland’s story cannot be the full picture. As anyone will tell you, cooperation can yield great benefits, but it also has its costs. A group of humans working together, with the aid of simple technologies like spears or axes, can hunt for energy-rich food. They can get more of this food working together than they can individually. But cooperative efforts like this can be exploited by free-riders, who take more than they give to the group effort.

Two types of free riders played an important role in human history:

Deceptive Free Riders: People who pretended to cooperate but actually didn’t and yet still received a benefit from the group.
Bullying Free Riders: People who intimidated or violently suppressed others in order to take more than their fair share of the group spoils (e.g. the alpha male dominant in a group).

A lot of attention has been paid to the problem of deceptive free riders over the years, but Christopher Boehm suggests that the bullying free rider was probably a bigger problem in human evolutionary history. 

He derives evidence for this claim from two main sources. First, studies of modern hunter gatherer tribes suggest that members of these groups all seem to have a strong awareness of and sensitivity to bullying behaviour within their groups. They gossip about it and try to stamp it out as soon as they can. Second, a comparison with our ape brethren highlights that they are beset by problems with bullying alpha males who take more than their fair share. This is particularly true of chimpanzee groups. (It is less true, obviously, of bonobo groups where female alliances work to stamp out bullying behaviour. Richard Wrangham explains the differences between bonobos and chimps as being the result of different food and environmental scarcities in their evolutionary environments.)

As Boehm sees it, then, the only way that humans could develop a strong altruistic moral conscience was if they could solve the bully problem. How did they do this? The answer, according to Boehm, is through institutionalised group punishment, specifically group capital punishment of bullies. By themselves, bullies could dominate others. They were usually stronger and more aggressive and could use their physical capacity to get their way. But bullies could not dominate coalitions of others working together, particularly once those coalitions had access to the same basic technologies that enabled big-game hunting. Suddenly the playing field was levelled. If a coalition could credibly threaten to kill a bully, and if they occasionally carried out that threat, the bullies could be stamped out.

Boehm’s thesis, then, is that the capacity for institutionalised capital punishment established a strong social selective pressure in primitive human societies. Bullies could no longer get their way. They had to develop a capacity for self-control, i.e. to avoid expressing their bullying instincts in order to avoid the wrath of the group. They had to start caring about their moral reputations within a group. If they acquired a reputation for cheating or not following the group rules, they risked being ridiculed, ostracised and, ultimately, killed.

It is this capacity for self-control that developed into the moral conscience — the inner imperative telling us not to step out of line. As Boehm puts it:

We moved from being a “dominance obsessed” species that paid a lot of attention to the power of high-ranking others, to one that talked incessantly about the moral reputations of other group members, began to consciously define its more obvious social problems in terms of right and wrong, and as a routine matter began to deal collectively with the deviants in its bands. 
(Boehm, Moral Origins, p 177)

What’s the evidence for thinking that institutionalised punishment was key to developing our moral conscience? Boehm cites several strands of evidence but his most original comes from a cross cultural comparison of human hunter gatherer groups. He created a database of all studied human hunter gatherer groups and noted the incidence and importance of capital punishment in those societies. In short, although modern hunter gatherer groups don’t execute people very often, they do care a lot about moral reputations within groups and most have practiced or continue to practice capital punishment in some form or other.

Richard Wrangham, who is also a supporter of the institutionalised punishment thesis, cites other kinds of evidence for this view. In his book The Goodness Paradox he argues that human morality emerged from a process of self-domestication (akin to the process we see in domesticated animals) and that we see evidence for this not just in the behaviour of humans but also in their physiology compared to their chimpanzee cousins (less sexual dimorphism, blunter teeth, less physical strength etc). It’s an interesting argument and he develops it in a very engaging way.

The bottom line for now, however, is that our moral conscience seems to have at least two evolutionary origin points. The first is our big brains and need for flexible learning: this made us dependent on others for long periods of our lives. The second is institutionalised punishment: this created a strong social selective pressure to care about reputation within a group and to favour conformity with group rules.

Understanding these origin points is important because it tells us something about the forces that are likely to alter our moral beliefs and practices in the future. Most humans have a tendency for groupishness: we care about our reputations within our groups, and we often try to conform with group expectations. That said, we are not sheep. Our brains often look for loopholes in group rules, trying to exploit things to our advantage. So we are sensitive to the opinions of others and wary of the threat of punishment, but we are willing to break the rules if the cost-benefit ratio is in our favour. This tells us that if we want to change moral beliefs and practices, an obvious way to do this is by manipulating group reputational norms and punishment practices.

4. Our Developed Conscience

So much for the general evolutionary forces shaping our moral conscience. There are obviously some individual differences too. We learn different behavioural rules in different social groups and through different life experiences. We are also, each of us, somewhat different with respect to our personalities and hence our inclinations to follow moral rules.

It would be impossible to review all the forces responsible for these individual differences in this article, but I will mention two important ones in what follows: (i) our basic norm-learning algorithm and (ii) personality types. I base my description of them largely on Patricia Churchland’s discussion in Conscience.

First, let’s talk about how we learn moral rules. Pioneering studies done by the neuroscientists Read Montague and Terry Sejnowski suggest that the human brain follows a basic learning algorithm known as the “reward-prediction-error” algorithm (now popularised as "reinforcement learning" in artificial intelligence research). It works like this (roughly):

  • The brain is constantly active and neurons in the brain have a base rate firing pattern. This base rate firing pattern is essentially telling the brain that nothing unexpected is happening in the world around it.
  • When there is a spike in the firing pattern this is because something unexpectedly good happens (i.e. the brain experiences a “reward”)
  • When there is a drop in the firing pattern this is because something unexpectedly bad happens (i.e. the brain experiences a “punishment”)

This natural variation in firing is exploited by different learning processes. Consider classical conditioning. This is where the brain learns to associate another signal with the presentation of a reward. In the standard example, a dog learns to associate the ringing of a bell with the presentation of food. In classical conditioning, the brain is switching the spike in neural firing from the presentation of the reward to the stimulus that predicts the reward (the ringing of the bell). In other words, the brain links the stimulus with the reward in such a way that it spikes its firing rate in anticipation of the reward. If it makes a mistake, i.e. the spike in firing does not predict the reward, then it learns to dissociate the stimulus from the presentation of the reward. In short, whenever there is a violation of what the brain expects (whenever there is an “error”), there is a change in the brain’s firing rate, and this is used to learn new associations.
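The update rule at the heart of this process can be sketched in a few lines of code. This is a minimal, Rescorla-Wagner-style illustration of prediction-error learning, not the actual model from the studies discussed; the learning rate and reward values are invented for the example:

```python
# Minimal sketch of prediction-error learning: the predicted value of a
# stimulus moves toward the observed reward by a fraction of the error.

def update(value, reward, lr=0.1):
    """One learning step: adjust the prediction by lr times the
    prediction error (reward - value), the 'surprise' signal."""
    error = reward - value
    return value + lr * error

# A stimulus (the bell) initially predicts nothing (value 0.0).
# Repeated pairing with a reward of 1.0 raises the prediction:
v = 0.0
for _ in range(50):
    v = update(v, reward=1.0)
# v now approaches 1.0: the bell has come to predict the food.

# If the reward then stops arriving, the association is unlearned:
for _ in range(50):
    v = update(v, reward=0.0)
# v decays back toward 0.0.
```

When the prediction matches the reward, the error is zero and nothing changes; learning is driven entirely by violations of expectation, which is the point the paragraph above makes.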

It turns out that this basic learning algorithm can also help to explain how humans learn moral rules. Our understanding of shared social norms guides our expectations of the social world. We expect people to follow the social norms and when they do not this is surprising. It seems plausible to suppose that we learn new social norms by keeping track of how people’s actual behaviour deviates from the norms we expected them to follow.

This has been studied experimentally. Xiang, Lohrenz and Montague performed a lab study to see if groups of people playing the Ultimatum Game learned new norms of gameplay by following the reward-prediction-error process. It turns out they did.

The Ultimatum Game is a simple game in which one player (A) is given a sum of money to divide between himself and another player (B). The rule of the game is that player A can propose whatever division of the money he prefers and player B can either accept this division or reject it (in which case both players get nothing). Typically, humans tend to favour a roughly egalitarian split of the money. Indeed, if the first player proposes an unequal split of the money, the second player tends to punish this by rejecting the offer. That said, there is some cross-cultural variation and, under the right conditions, humans can learn to favour a less egalitarian split.
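The structure of the game is simple enough to state as a payoff function. The sketch below is a hypothetical illustration of the rules just described, with the $20 stake from the experiment discussed next:

```python
def ultimatum_payoffs(total, offer_to_b, b_accepts):
    """One-shot Ultimatum Game: player A proposes giving `offer_to_b`
    out of `total` to player B. If B rejects, both get nothing."""
    if not b_accepts:
        return (0, 0)
    return (total - offer_to_b, offer_to_b)

# An egalitarian split, accepted:
ultimatum_payoffs(20, 10, True)    # (10, 10)

# An unequal split, punished by rejection, at a cost to both players:
ultimatum_payoffs(20, 2, False)    # (0, 0)
```

The second case shows why rejection counts as punishment: B pays $2 to deprive A of $18, which is irrational on a narrow self-interest view but makes sense as norm enforcement.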

Xiang, Lohrenz and Montague ran the experiment like this:

  • They had two different types of experimental subjects: donors, who would propose different divisions of $20, and responders, who would accept or reject these divisions.
  • They then ran multiple rounds of the Ultimatum Game (60 in total). They split responders into two different groups in the process. Group one would run through a sequence of games that started with donors offering very low (inegalitarian) sums and ended with high (egalitarian) ones. Group two would run through the opposite sequence, starting with high offers and ending with low ones.
  • In other words, responders in group one were trained to expect unequal divisions initially and then for this to change, while those in group two were trained to expect equal divisions and then for this to change.

The researchers found that, under these circumstances, the responders’ brains seemed to follow a learning process similar to that of reward-prediction-error, something they called “norm prediction error”. In this learning process, the violation of a norm is perceived, by the brain, as an error. This can be manipulated in order to train people to adapt to new norms.
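The logic of norm prediction error can be illustrated with a toy simulation. To be clear, everything here is invented for illustration (the offer sequences, the starting expectation, the learning rate); it is not the actual model or data from the Xiang, Lohrenz and Montague study. The point is only that the same offer produces opposite-signed errors depending on the norm each group has learned:

```python
# Toy model: a responder tracks an expected offer (their learned norm)
# and updates it by a fraction of each norm prediction error.

def track_norm(offers, expected=10.0, lr=0.2):
    """Update the expected offer after seeing each offer in sequence."""
    for offer in offers:
        expected += lr * (offer - expected)   # norm prediction error update
    return expected

low_to_high = [3] * 30    # group one: trained on stingy offers first
high_to_low = [16] * 30   # group two: trained on generous offers first

norm_1 = track_norm(low_to_high)    # settles near 3
norm_2 = track_norm(high_to_low)    # settles near 16

# The same $9 offer at round 31 is a pleasant surprise for group one
# (positive error) and a disappointment for group two (negative error):
print(9 - norm_1 > 0)   # True
print(9 - norm_2 < 0)   # True
```

This mirrors the experimental finding: identical offers, opposite moral reactions, because the two groups had been habituated to different normative baselines.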

One of the particularly interesting features of this experiment was how the different groups of responders perceived the morality of the different divisions. At round 31 of the game, both sets of responders received the exact same offer: nine dollars. Those in group one (the low-to-high offer group) thought that this was great because it was more generous than they were initially trained to expect (bearing in mind their background cultural norms, which were to expect a fair division). Those in group two thought it was not so great since it was less generous than they had been trained to expect.

The important point about this experiment is that it tells us something about how norms shape our expectations and hence affect the changeability of our moral beliefs and practices. We all become habituated to a certain normative baseline in the course of our own lives. Nevertheless, with the right sequence of environmental stimuli it’s possible that, within certain limits, our norms can shift quite rapidly (Churchland argues that fashion norms are a good example of this).

The other point that is worth mentioning now is how individual personality type can affect our moral conscience. Churchland explains this using the Big Five personality model (openness, conscientiousness, extroversion, agreeableness and neuroticism), which is commonly used in psychology. She notes that where we fall on the spectrum with respect to these five traits affects how we interact with and respond to moral norms. For example, those who are more extroverted, agreeable and open can be easier to shift from their moral baseline. Those who are more conscientious and neurotic can be harder to shift.

She also offers an interesting hypothesis. She argues that there are two extreme moral personality types:

Psychopaths: These are people that appear to lack a moral conscience. They often know what social morality demands of them but they lack any emotional attachment to the social moral rules. They do not experience them as painful violations of the moral order. These people have an essentially amoral experience of the world (though they can act in what we would call “immoral” ways).
Scrupulants: These are people that have a rigid and inflexible approach to moral rules (possibly rooted in a desire to minimise chaos and uncertainty). They often follow moral rules to their extremes, sometimes neglecting family, friends and themselves in the process. They are almost too moral in their experience of the world. They are overly attached to moral rules.

Identifying these extremes is useful, not only because we sometimes have to deal with psychopaths and scrupulants, but also because we all tend to fall somewhere between these two extremes. Some of us are more attached to existing moral norms than others. Knowing where we all lie on the spectrum is crucial if we are going to understand the dynamics of moral change. (It may also be the case that it is those who lie at the extremes that lead moral revolutions. This is something I suggested in an earlier essay on why we should both hate and love moralists).

5. Conclusion

In summary, moral change is defined by changes in what we value and what we perceive to be right and wrong. The mechanism responsible for this change is, ultimately, the human brain since it is the organ that creates and sustains moral beliefs. But the moral beliefs created and sustained by the human brain are a product of evolution and personal experience.

Evolutionary forces appear to have selected for prosocial, groupish tendencies among humans: most of us want to follow social moral norms and, perhaps more crucially, be perceived to be good moral citizens. That said, most of us are also moral opportunists, open to bending and breaking the rules under the right conditions.

Personal experience shapes the exact moral norms we follow. We learn normative baselines from our communities, and we find deviations from these baselines surprising. We can learn new moral norms, but only under the right circumstances. Furthermore, our susceptibility to moral change is determined, in part, by our personalities. Some people are more rigid and emotionally attached to moral rules; some people are more flexible and open to change.

These are all things to keep in mind when we consider the dynamics of moral revolutions.

So Much for the Decentralized Internet

Kanye West, Elon Musk, Bill Gates, and Barack Obama were all feeling generous on the evening of July 16, according to their Twitter accounts, which offered to double any payments sent to them in bitcoin. Not really, of course; they’d been hacked. Or, rather, Twitter itself had been hacked, and for apparently stupid reasons: The perpetrators stole and resold Twitter accounts and impersonated high-follower users to try to scam people out of cryptocurrency.

“The attack was not the work of a single country like Russia,” Nathaniel Popper and Kate Conger reported at The New York Times. “Instead, it was done by a group of young people … who got to know one another because of their obsession with owning early or unusual screen names.” The hackers gained access to Twitter’s tools and network via a “coordinated social engineering attack,” as Twitter’s customer-support account called it—a fancy way of admitting that their employees got played. All told, 130 accounts were compromised. “We feel terrible about the security incident,” Twitter CEO Jack Dorsey said last week, in prepared remarks on an earnings call.

The hack makes Twitter look incompetent, and at a bad time; its advertising revenues are falling, and the company is scrambling to respond. It also underscores the impoverished cybersecurity at tech firms, which provide some employees with nearly limitless control over user accounts and data—as many as 1,000 Twitter employees reportedly had access to the internal tools that were compromised. But the stakes are higher, too. Though much smaller than Facebook in terms of its sheer number of users, Twitter is where real-time information gets published online, especially on news and politics, from a small number of power users. That makes the service’s vulnerability particularly worrisome; it has become an infrastructure for live information. The information itself had already become weaponized; now it’s clear how easily the actual accounts publishing that information can be compromised too. That’s a terrifying prospect, especially in the lead-up to the November U.S. presidential election featuring an incumbent who uses Twitter obsessively, and dangerously. It should sound the internet equivalent of civil-defense sirens.

Like many “verified” Twitter users who compose its obsessive elite, I was briefly unable to tweet as the hack played out, Twitter having taken extreme measures to try to quell the chaos. I updated my password, a seemingly reasonable thing to do amid a security breach. Panicked, Twitter would end up locking accounts that had attempted to change their password in the past 30 days. A handful of my Atlantic colleagues had done the same and were similarly frozen out. We didn’t know that at the time, however, and the ambiguity brought delusions of grandeur (Am I worthy of hacking?) and persecution (My Twitterrrrrr!). After less than a day, most of us got our accounts back, albeit not without the help of one of our editors, who contacted Twitter on our behalf.

[Read: Twitter’s least-bad option for dealing with Donald Trump]

The whole situation underscores how centralized the internet has become: According to the Times report, one hacker secured entry into a Slack channel. There, they found credentials to access Twitter’s internal tools, which they used to hijack and resell accounts with desirable usernames, before posting messages on high-follower accounts in an attempt to defraud bystanders. At The Atlantic, those of us caught in the crossfire were able to quickly regain access to the service only because we work for a big media company with a direct line to Twitter personnel. The internet was once an agora for the many, but those days are long gone, even if everyone can tweet whatever they want all the time.

It’s ironic that centralization would overtake online services, because the internet was invented to decentralize communications networks—specifically to allow such infrastructure to survive nuclear attack.

In the early 1960s, the commercial telephone network and the military command-and-control network were at risk. Both used central switching facilities that routed communications to their destinations, kind of like airport hubs. If one or two of those facilities were to be lost to enemy attack, the whole system would collapse. In 1962, Paul Baran, a researcher at RAND, had imagined a possible solution: a network of many automated nodes that would replace the central switches, distributing their responsibility throughout the network.

The following year, J. C. R. Licklider, a computer scientist at the Pentagon’s Advanced Research Projects Agency (ARPA), conceived of an Intergalactic Computer Network that might allow all computers, and thereby all the people using them, to connect as one. By 1969, Licklider’s successors had built an operational network after Baran’s conceptual design. Originally called the ARPANet, it would evolve into the internet, the now-humdrum infrastructure you are using to read this article.

Over the years, the internet’s decentralized design became a metaphor for its social and political ethos: Anyone could publish information of any kind, to anyone in the world, without the assent of central gatekeepers such as publishers and media networks. Tim Berners-Lee’s World Wide Web became the most successful interpretation of this ethos. Whether a goth-rock zine, a sex-toy business, a Rainbow Brite fan community, or anything else, you could publish it to the world on the web.

For a time, the infrastructural decentralization of the web matched that of its content and operations. Many people published to servers hosted at local providers; most folks still dialed up back then, and local phone calls were free. But as e-commerce and brochureware evolved into blogs, a problem arose: distributed publishing still required a lot of specialized expertise. You had to know how to connect to servers, upload files, write markup and maybe some code, and so on. Those capacities were always rarefied.

[Read: The people who hated the web even before Facebook]

So the centralization began. Blogs, which once required installing software on your own server, became household services such as Blogger, Typepad, WordPress, and Tumblr. Social-media services—Facebook, Twitter, Instagram, Snapchat, TikTok, and the rest—vied for user acquisition, mostly to build advertising-sales businesses to target their massive audiences. The services began designing for compulsion, because more attention makes advertising more valuable. Connections, such as friends on Facebook, followers on Twitter, colleagues on LinkedIn, all became attached to those platforms rather than owned by individuals, and earlier efforts to decentralize those relationships effectively vanished. Even email, once local to ISPs or employers, became centralized in services such as Hotmail and Gmail.

In the process, people and the materials they create became the pawns of technology giants. Some of the blog platforms were acquired by larger tech companies (Blogger by Google, Tumblr by Yahoo), and with those roll-ups came more of the gatekeeping (especially for sexually explicit material) that decentralization had supposedly erased. One of the most urgent questions in today’s information wars surrounds how—not whether—Facebook should act as a gatekeeper of content across its massive, centralized global network. “Content,” in fact, might be the most instructive fossil of this age, a term that now describes anything people make online, including the how-to videos of amateur crafters, the articles that journalists write, and policy pronouncements by world leaders. Whereas one might have once been a writer or a photographer or even a pornographer, now publishing is a process of filling the empty, preformed vessels provided by giant corporations. A thousand flowers still bloom on this global network, but all of them rely on, and return spoils to, a handful of nodes, just as communications systems did before the ARPAnet.

Nuclear war is less of a threat than it was in 1963, but the risks of centralized communications systems have persisted and even worsened. By contrast, centralized online services such as Twitter and Facebook have become vectors for disinformation and conspiracy, conditions that have altered democracy, perhaps forever. A global, decentralized network promised to connect everyone, as its proponents had dreamed it would. But those connections made the network itself dangerous, in a way Licklider and others hadn’t anticipated.

The cybersecurity implications of this sort of centralization are deeply unnerving. With titans of business, popular celebrities, and world leaders all using (and sometimes abusing) Twitter and other services, the ability to centrally attack and control any or all of those accounts—Taylor Swift’s or Donald Trump’s—could wreak far more havoc than a day of bitcoin fraud. What if businesses or elections were targeted instead?

The fact that the Twitter hack wasn’t consequential further distances the public from the risks of centralization in information infrastructure. Most Twitter users probably didn’t even notice the drama. As far as we know, the few who were hacked suffered limited ill effects. And the low-grade power users, like me, who were caught in the crossfire either got their account back and carried on as before or didn’t (yet) and amount to uncounted casualties of centralized communications.

One of those casualties is another of my Atlantic colleagues, Ellen Cushing. She had been on the outs with Twitter for some time, she told me, and just decided not to bother regaining control of her account. Instead, she’s rekindled an interest in media-outlet homepages. But already, Cushing has realized what she’s missing: the view of what’s happening now that Twitter uniquely offers. Twitter got its wish of becoming the place people go for real-time updates on news. But that also means that when it fails, as it did during this screen-name hack, part of our communications infrastructure also fails. Twitter isn’t just a place for memes or news, or even presidential press releases meted out in little chunks. It’s where the weather service and the bank and your kid’s school go to share moment-to-moment updates. Though seemingly inessential, it has braided itself into contemporary life in a way that also makes it vital.

[Read: The Twitter hacks have to stop]

That leaves us with a worst-of-all-worlds situation. The physical and logical infrastructure that helped communications networks avoid catastrophic failure has devolved back into a command-and-control model. At the same time, public reliance on those networks has deepened and intensified. Together, Facebook (which owns Instagram and WhatsApp), Google (which includes YouTube), Twitter, and Snap are worth a combined $1.7 trillion. In China, where Western services are banned, WeChat, Weibo, and Qzone count roughly 2 billion users among them. The drive to “scale” technology businesses has consolidated their wealth and power, but it has also increased their exposure. Bad actors target Facebook and Twitter for disinformation precisely because those services can facilitate its widespread dissemination. In retrospect, Licklider’s Intergalactic Computer Network should have sounded like an alien threat from the start.

But even after years of breaches and threats on Facebook, Twitter, YouTube, and beyond, tech can’t shake its obsession with centralization. The Facebook and Palantir Technologies investor Peter Thiel has exalted monopoly as the apotheosis of business. Content-creation start-ups enter the market not to break the grip of Google or Facebook, but in hopes of being acquired and rolled up into them. New social networks, such as TikTok, capture novel attention among younger audiences who find Facebook unappealing, but they still operate from the same playbook: Capture as many users as possible on a single platform in order to monetize their attention.

In the process, new platforms introduce new risks too. Because it is based in China, U.S. officials fear that TikTok could be a national-security threat, used to spread disinformation or amass American citizens’ personal information. But rather than seeing the service as intrinsically threatening, as all its predecessors have been, some critics are tripping over themselves to celebrate TikTok as a charming new cultural trend. Writing at The Washington Post, Geoffrey A. Fowler compared national-security concerns about the service to xenophobia. During the Cold War, American policy responded to potential threats, even remote ones, with huge investments of money and manpower for disaster preparedness. The internet itself rose from that paranoid fog. Now we download first and ask questions later, if ever, until something goes terribly wrong—and maybe not even then.

The risks to online services are more numerous and varied than ever, especially now that they cater to billions of people all over the world. Somehow, the imagined consequences still seem minor—virtual real-estate or cryptocurrency grifts—even as the actual stakes are as fraught as, or worse than, they were half a century ago. The internet was invented to anticipate the aftermath of nuclear war, which thankfully never happened. But the information war that its technological progeny ignited happens every day, even if you can’t log in to Twitter to see it.


Michael Brown, Eric Garner, and the ‘Other America’


Growth and progress could be this nation's reward for facing the challenge of our times with courage and a demand for equal justice. The American Revolution, the Civil War, the Great Depression, and the civil-rights movement of the 1960s were moments when the United States could have been torn from its very foundation, but a creative response to this turmoil helped move the nation forward.

At its best, non-violent protest is a strategically engineered crisis designed to wake up a sleeping nation, to educate and sensitize those who become awakened, and to ignite a sense of righteous indignation in people of goodwill to press for transformation. That's what the protests galvanized by the deaths of Michael Brown, Eric Garner, Trayvon Martin, and others are trying to accomplish.

Many Americans find themselves at a loss to understand the depth of the anger and frustration of the protestors. It might be worthwhile for them to read a speech Dr. Martin Luther King Jr. delivered on April 14, 1967, at Stanford University. A colleague of mine in Congress reminded me of his words, and I find they ring as true today as they did almost 50 years ago.

In the speech, King describes what he calls the "other America," one of two starkly different American experiences that exist side-by-side. One people "experience the opportunity of life, liberty, and the pursuit of happiness in all its dimensions," and the other a "daily ugliness" that spoils the purest hopes of the young and old, leaving only "the fatigue of despair." The Brown and Garner cases themselves are not the only focus of the protestors' grievances, but they represent a glimpse of a different America most Americans have found it inconvenient to confront.

One group of people in this country can expect the institutions of government to bend in their favor, no matter that they are supposedly regulated by impartial law. In the other, children, fathers, mothers, uncles, grandfathers, whole families, and many generations are swept up like rubbish by the hard, unforgiving hand of the law.

They are offered no lenience, even for petty offenses, in a system that seems hell-bent on warehousing them by the millions, while others escape the consequences of pervasive malfeasance scot-free. Some people rationalize that it was unfortunate, but not altogether disturbing, that Michael Brown was put to death without due process because, after all, he allegedly took some cigarillos from a corner store. But who went to jail for the mortgage fraud that robbed his community and other black communities around the country of 50 percent of their wealth?

Should people accused of stealing be held accountable? Definitely. But the justice system entangles the most vulnerable so effectively that even the innocent often find it easier to just plead guilty. Meanwhile the capable, and sometimes the stealthiest and most damaging, are slapped on the wrist and given a pass.

If Americans are to be honest with themselves, they must admit we may never know what actually happened to Michael Brown because of the unusual way the grand-jury process was conducted by a local prosecutor whose independence was in doubt. They must admit that publishing a selective collection of details online corrupts the integrity of grand-jury deliberations and proceedings meant to be held in confidence. It subverts a judicial process designed to air the arguments of both sides—the victim and the perpetrator—exposing them both to challenge and cross-examination.

Denying any victim of homicide the right to a public trial is a painful outcome, but to distort the process and use it to achieve that goal compounds the tragedy of homicide with robbery. It's no wonder then that even videotaped evidence showing Eric Garner pleading to breathe 11 times would lead to no indictment. It proves the protestors' point—in some courts even the worst offenders can go free as long as they wear a badge.  

Don't get me wrong—I work with police every day. Whenever I see them, I let them know I appreciate their service. The job is difficult, and there are many responsible officers, but does that mean they should avoid scrutiny when they take a human life, especially under questionable circumstances? Isn't that the law they are supposed to defend?

Thousands of people—young and old, black, white, and brown—are speaking to the nation.  They are "dying in" to shake it out of denial. They are saying that American society is blind to hundreds, even thousands of murders perpetrated in its name by agents of governments. They are saying that blood is on the hands of the nation and its people. (Black-on-black crime, or white-on-white crime for that matter, is an important but different discussion, and it does not justify what is done by agents with the presumed consent of society.)

Today's protestors demand that Americans confront several questions as a national community:

Is it all right with them that police kill hundreds of unarmed teens and young men every year without having to account for their actions? Do they mind that a retired veteran who accidentally pressed his medical-alert button is now dead at the hands of police? Or that a 12-year-old boy playing with a toy gun in a park near his home, a 22-year-old man talking on a cell phone in a Walmart, a 17-year-old walking home from the corner store, an unarmed 23-year-old man attending his own bachelor party shot 50 times, or a 7-year-old girl at home asleep in her bed were all killed by their representatives? One recent study reports that one black man is killed by police or vigilantes in our country every 28 hours, almost one a day.

Doesn't that bother you?

Ever since black men first came to these shores we have been targets of wanton aggression. We have been maimed, drugged, lynched, burned, jailed, enslaved, chained, disfigured, dismembered, drowned, shot, and killed. As a black man, I have to ask why. What is it that drives this carnage? Is it fear? Fear of what? Why is this nation still so willing to suspend the compassion it gives freely to others when the victims are men who are black or brown?  

Soon the nation will celebrate the 50th anniversary of Bloody Sunday, the day unarmed, nonviolent protestors were brutalized by deputized citizens and Alabama state troopers on the Edmund Pettus Bridge. As a leader of that march, I wonder, if the same attack took place in Ferguson today, would Americans be shocked enough to do anything about it? What has happened to the soul of America that makes citizens more interested in justifying these murders than stopping them?

Dr. King declared in his 1967 speech, "Racism is evil because its ultimate logic leads to genocide .... It is an affirmation," he said, "that the very being of a people is inferior," and therefore unworthy of the same regard as other human life. Do Americans accept the deaths of hundreds and thousands of young men and boys simply because they are black? Ignorance of their day-to-day lives is no excuse for what is done in society’s name.

In the presence of injustice, no one has the right to be silent. Members of government and the business, faith, and even law-enforcement communities must stand up and say enough is enough. Let the young lives of Michael Brown, Eric Garner, Sean Bell, John Crawford, and Trayvon Martin serve a higher purpose to shine the light of truth on our democracy and challenge us to meet the demand for equal justice in America.

There is a growing discontent in this country. And if the fires of frustration and discontent continue to grow without redress, I fear for the future of this country. There will not be peace in America. I do not condone violence under any circumstance. It does not lead to lasting change. I do not condone either public rioting or state-sponsored terrorism. "True peace," King would tell us, "is not merely the absence of tension; it is the presence of justice."


Will COVID-19 Spark a Moral Revolution? Eight Possibilities


The dust has not settled yet, and it may not settle for some time, but already people are wondering what kind of society we will have once the COVID-19 crisis comes to an end. Some are excited by the “imaginative possibilities” it opens up; some are concerned that it challenges our existing moral frameworks; others are worried about the slippery slope to authoritarianism and social control.

I am interested in this too. I am particularly interested in whether the disruptions and adjustments necessitated by COVID-19 will spark a moral revolution. In other words, will it change our moral beliefs and practices in a significant way? There is no doubt that our civilisation has been shaken to its core and new potentialities are tantalisingly being revealed in the space of moral possibility. But which way will we shift and rebalance ourselves?

There are many ‘thinkpieces’ out there already that offer some opinions on these questions. In this article, I want to take a step back and think about the issue in a more systematic way. I do so in three stages. First, I discuss the general phenomenon of a ‘moral revolution’. What does it mean to say that morality has been revolutionised? How can we tell that we have undergone a moral revolution? Second, I discuss the various ways in which COVID-19 and our response to it may change our moral beliefs and practices. I don’t offer definitive opinions but, rather, try to survey the various possibilities in a reasonably comprehensive fashion (with the caveat that nothing is ever truly comprehensive). Third, and finally, I offer some reasons to be sceptical about the prospects of a genuine moral revolution resulting from COVID-19.

1. What is a moral revolution anyway?
It’s important to be somewhat precise about the concept of a moral revolution at the outset. If we aren’t, then we won’t know whether to classify some social change made in response to COVID-19 as a genuine moral revolution.

It helps if we start with the concept of ‘morality’ itself. Moral philosophers often adopt a normative view of morality. For them, morality is the set of rules and theories that describes what is good/bad and right/wrong. Although there are plenty of philosophers who are sceptics and nihilists about the possibility of moral truth, there are also plenty who are moral realists and believe that there are correct moral theories and rules that do not change over time or depend on what people do or believe. For these people the idea of moral revolution might sound nonsensical. Morality is not something that changes or alters over time: it is something that is already waiting out there to be discovered by our reason.

Social theorists and psychologists often adopt a more descriptive view of morality. For them, morality is the set of socially accepted rules and theories that people use to determine what is good/bad and right/wrong. There is no guarantee that these socially accepted theories are correct. They can, and in fact often do, change over time. For example, once upon a time many people thought it was morally acceptable to own slaves. Most people now reject this belief. What happened? There was a revolution in social morality. What people once deemed permissible was rejected as impermissible; what people once thought was good was categorised as bad. The evaluative standards that lay at the heart of our social moral consciousness shifted.

Thinking about moral revolutions makes most sense if you adopt this descriptive perspective. Moral revolutions are changes in social moral consciousness. They are not simply changes in behaviour. After all changes in behaviour can be enforced through authoritarian control without any underlying change in social morality. One day a dictator could declare that homosexuality is morally abhorrent. He could enforce this decree by banning homosexual relations and instituting harsh punishments. This might change people’s behaviours but it wouldn’t necessarily change their moral consciousness: they might continue to believe that homosexuality is morally acceptable. It’s only if there are changes in associated beliefs that there is a genuine moral revolution.

How do changes in moral beliefs take place? How do revolutions get started? There are divergent answers to those questions. Kwame Anthony Appiah, for example, has proposed that moral revolutions are catalysed by changing beliefs about the nature of honour. This is because perceptions of honour play a key role in our moral psychology: we are motivated to do the things that we perceive to be honourable. He may be on to something with this. I would suggest, however, a less precise and more abstract set of mechanisms. Moral revolutions start when there is some 'shock' to the social order. This could be internal (endogenous) or external (exogenous), or a bit of both. The COVID-19 pandemic would seem to be a largely exogenous shock (though it was certainly encouraged by practices that are inherent to modern industrial-agricultural society). This shock prompts or forces new behaviours and new styles of thinking. We have to make sense of our new reality. To do this we go to the existing pool of moral ideas, theories and concepts (which is vast). Ideas emerge from this pool that help to justify, reinforce or control the new reality. This leads to a refinement of our moral consciousness and, if the process continues in the right way, a moral revolution.

Should we be careful about using the term ‘revolution’ in this context? One of the most thoughtful and well worked-out theories of moral revolution can be found in Robert Baker’s book The Structure of Moral Revolutions (which I discussed here). Using Thomas Kuhn’s theory of scientific revolutions as his model, Baker argues that we should distinguish between three types of changes in social morality: (i) revolutions which involve some intentional change in a general moral paradigm (e.g. a change in an abstract normative theory or principle like a shift to utilitarianism in lieu of traditional Christian morality); (ii) reforms which involve some intentional change in moral beliefs that are less far-reaching than revolutions (e.g. changes in what we believe about the morality of homosexuality without a change in an underlying paradigm) and (iii) drifts which are non-intentional changes in social morality.

Baker’s theory is an interesting one but I think these distinctions are unnecessary and unnecessarily complicated. While it might be intellectually interesting to classify different kinds of moral change depending on their directedness or gravity I suspect that in most cases what we really care about is whether there has been some change in morality at all and not whether it was intentionally directed or whether it involved a change to a moral paradigm as opposed to a less central moral belief. In any event, I won’t be too precious about how I use the term ‘revolution’ in the remainder of this article. I will use it to refer to any noteworthy change in social moral beliefs.

It’s worth saying one thing about the importance of individual humans in instituting moral revolutions. Michele Moody-Adams — who has written extensively about the idea of moral progress and change — has argued that certain individuals (moral visionaries) often play a key role in moral revolutions. These are people who see new moral possibilities in the world, who reframe and reevaluate events and behaviours in a way that casts them in a new moral light, and who refine and expand existing norms and theories. These people often lead moral revolutions through their ability to use language to redescribe and recategorise our moral predicament. A good example might be Martin Luther King Jr, who, although not the only one to speak out about the injustices visited upon the black population of the United States, managed to do so in a particularly effective way, using some arresting metaphors and articulating a vision (a “dream”) for our moral future. Similarly, some feminist activists, such as Catharine MacKinnon, have played a key role in redescribing unwanted flirtation or interest in the workplace as sexual harassment. In doing this, they managed to ‘see’ something that other people missed (the sexual harassment example is one that Moody-Adams relies upon in her work).

One thing we might be curious about, as we now turn to consider the possible moral revolutions that might be kicked off by the COVID-19 pandemic, is whether there are any such moral visionaries at work at present. Are we being guided to a new moral paradigm by their insights and leadership? I’ll return to this question later.

2. How Might COVID-19 Revolutionise Morality?
COVID-19 is altering many of our daily habits and practices. Many people have lost their jobs and become reliant on the state for survival. Many people have been forced to work from home and interact with people online instead of in the real world. Many working parents have suddenly realised how exhausting it is to look after children full time. And so on. The changes are everywhere. Will any of them spark a moral revolution? In what follows I will briefly sketch eight nascent moral revolutions that might be precipitated by the current pandemic. At the moment, most of these nascent revolutions are either taking place at the level of behavioural change or, in some cases, are just mere possibilities that seem to be encouraged by the dynamics of the pandemic. None of them really seems to involve a change in social moral consciousness. At least not yet.

(a) Hyper-Utilitarianism
The first nascent moral revolution involves a shift to a hyper-utilitarian social ethic. I recently saw an interesting comment on Twitter (I think it was from Diana Fleischman, but I cannot remember). It went something to the effect of: “just as there are no atheists in foxholes, so too there are no non-consequentialists in triage”. This was a comment on some of the stark decisions being forced upon doctors and healthcare workers in the midst of the pandemic surge. These decisions have occurred most visibly in Italy and New York. With a limited supply of medical resources to go around, healthcare workers have been forced to make essentially utilitarian calculations about which patient is worth saving. Many times this has come down to saving younger people at the expense of older people. It’s a little more complicated than that, of course (if you are interested in the topic I did a whole podcast episode about it with Lars Sandman), but it is still noteworthy that maximising the number of life years saved is one of the more dominant criteria being used to ration healthcare.

Furthermore, it is not just in the healthcare context that the utilitarian approach is in the ascendancy. We also see it, more broadly, in the economic sphere. We are now classifying workers based on whether they are essential or not. Healthcare workers? Grocery store workers? Sanitation workers? All essential. University professors? Beauticians? Baristas? Sadly non-essential. When we lift lockdown orders and try to return our societies to something like the pre-pandemic reality, we will also be forced to make such calculations. Some people will be deemed more essential — more socially important — than others and allowed back sooner. Some might never be allowed back (cruise ship captains?).

You might argue that this hyper-utilitarian mode of thinking is just being forced on us by the crisis. We will abandon it when we get a chance. You might also argue that behind the general lockdown orders lurks the dignitarian principle that every life is sacred and worth saving. But could this just be a fiction that is no longer sustainable in the face of necessary utilitarianism? Societies are clearly making choices that some lives are more important than others. We have always done this to some extent. Indeed, it may be unavoidable. But the pandemic might be forcing the utilitarian choices into the open in an unprecedented way. It might be like pointing out that the emperor has no clothes. We cannot unsee what we have now seen.

The end result might be that society re-emerges from the crisis embracing a hyper-utilitarian view in which it is acceptable to rank and prioritise lives according to some metric of relative worth. I am not sure what this will mean in practice but one possibility would be that we adopt an extreme version of the Chinese social credit system, wherein everyone is given a rating based on their worth to society or their contribution to the common good and are then selectively exposed to social benefits and burdens.

(b) The End of Work
Closely related to the above, is the possibility that the pandemic might be revealing how many things we thought were socially necessary are, in fact, optional and arbitrary. Close to my own heart — given that I wrote a whole book critiquing the role of work in our lives — is the fact that the crisis might be highlighting how unnecessary certain forms of work really are. We tend to moralise work and think that it plays an important role in our well-being. The fact that many are now forced to accept that their work is non-essential and have to take an involuntary break from it, might encourage a rethink. Maybe we shouldn’t moralise our work so much?

This is certainly true for me. I am now working from home, as is my partner. We are fortunate that this is possible. We have a young daughter and she needs to be looked after. So we don’t work full time. We work, at most, half time instead. I know of many parents who have to do this (single parents, of course, face tougher choices). For me, working half time has made me realise how little of what I do is important and how it is possible to do most of what I need to do in far less time than it used to take. In a sense, I could work half time all the time and no one would know the difference (though, ssshh, don’t tell my employer this!). Furthermore, I now appreciate how lucky I am to spend so much time with my daughter as she navigates the first few months of her life. In a country with limited paternity leave, I am being given a taste of something I would not ordinarily have (though, I am not going to lie, it is pretty draining sometimes).

This is one reason why some of the groups that have long been advocating for a reduction in the working week, such as Autonomy UK, see the crisis as an opportunity for their movement. The changes to working habits and practices may force a change in moral consciousness around work. Perhaps it shouldn’t occupy such a central role in our lives?

That said, I would be cautious about the possibility of a genuine moral revolution around work. The present circumstances are a less-than-ideal natural experiment for the possibility of a post-work economy. Living in lockdown, not being allowed to visit friends and family, travel to the beach or countryside, or participate in rewarding leisure activities, means that people might not see the current predicament as better than work. Indeed, I already hear rumblings to this effect in my peer group. People are now saying that they can’t wait to get back to the office. I suspect this is largely because we all need a break from our families from time to time. If we weren’t required to comply with public health orders, this would be possible. But since it is not going to be possible until this ends, people might learn the wrong lesson from this ordeal. They may redouble their commitment to the work ethic, not slacken it.

(c) Renegotiated Social Contract
Another thing the present crisis has revealed is the inadequacy and inequality inherent in many societies. The disease itself strikes some people (notably the elderly) more harshly than others. The associated economic shock has hit some people and some countries harder than others.

The ’social contract’ is the term moral and political philosophers use to describe the agreement we have in society about how rights get protected and goods get distributed. It’s a bit of a fiction, of course, but it seems plausible to suggest that there is some kind of ongoing negotiation about this in every society. No matter where you live, the COVID-19 pandemic is testing the social contract. Governments have to scramble to decide how to prioritise lives and well-being and how to compensate for economic losses. Some governments have responded in a remarkably dynamic way: significantly increasing healthcare capacity, welfare payments to those who have lost their jobs and support for businesses that are struggling. One of the most notable developments is that many countries have switched to something pretty close to a universal basic income for all citizens, at least over the short term.

Is there an opportunity for revolution here? This is one of the features of the present crisis that has been most remarked upon. Amartya Sen has penned an op-ed suggesting that there is an opportunity to build a more communitarian and equal society as we come out of the pandemic. We have now seen what it is possible for governments to do when their backs are against the wall. Perhaps some of these changes can become more permanent? Sen is cautiously optimistic on this front, though admits that past crises didn’t always leave lasting changes.

Some people are less optimistic and revolutionary in their outlook. John Authers wrote an interesting piece for Bloomberg in which he argued that the current pandemic was testing our moral frameworks but that we were, ultimately, favouring a basically Rawlsian maximin approach to the social contract: raise the floor for the most vulnerable. This wouldn’t be a revolution since, according to Authers, the Rawlsian view has been dominant for some time. I’m not sure about this, but I accept that there is plenty of opposition to the more radical egalitarian and communitarian possibilities inherent in the present crisis. For example, one of the reasons why some politicians, notably in the US, have opposed more dramatic reforms to social welfare is that they fear that these changes will become permanent. It’s as if they are anticipating the revolution and trying to preempt it.

(d) The New New Death of Privacy
In the late 1990s and early 2000s there was a lot of talk about the death of privacy. The belief was that as surveillance technology became more widespread people would, to a large extent, trade their privacy for other conveniences, e.g. cheaper and more efficient services. Some optimists argued that people would insure themselves against the loss of personal privacy by turning surveillance technologies back on those who might abuse their power (the so-called ‘sousveillance’ approach).

To some extent, this revolution in our attitude to privacy has come to pass. Surveillance technologies definitely are more widespread and people do often trade personal privacy for other conveniences. Nevertheless, there has been a significant retrenchment from the optimistic, ‘death of privacy’, view in recent years. Furthermore, privacy activists have won some notable legal battles, particularly in Europe, ensuring the protection of privacy in the digital age.

This retrenchment may now come to an end. It has become clear that one of the primary tools that governments have used — and plan to use — to resolve the COVID-19 pandemic is increased surveillance and control. Identifying those who are infectious, and those they might have come into contact with, and isolating them from everyone else is the only viable long-term solution to the pandemic in the absence of an effective cure or vaccine. This requires testing individuals and recording their healthcare data. It also requires tracking and controlling people. This is likely true even if people voluntarily commit to isolation and quarantine. This could be done manually (i.e. by individual case workers) or it may, in some cases, be done through some kind of digital tracking and tracing. In fact, many governments are encouraging digital solutions to the problem, partly because they are seen to be more efficient and scalable, and partly because we live in an age where this kind of technological solutionism is favoured. This is to say nothing, incidentally, of the kinds of surveillance and control that will be favoured by private corporations in their effort to ensure safe and productive working environments.

What this could mean, in practice, is that we will witness the new death of privacy. Faced with a choice between the inconveniences of lockdown and the intrusiveness of surveillance and tracking, many people will choose the latter. That’s if they even get a choice. Some governments will choose (and some already have chosen) to impose surveillance technologies on their populations in an effort to get their economies back to some level of functionality; some companies will require employees to do so before they can return to work. It’s hard to see how privacy can be sustained in light of all this unless we get an effective treatment and vaccine and even then we can expect some recording and tracking of healthcare data (e.g. through immunity passports).

The issue is complicated. There are those that argue that the choice between privacy and public health is a false one. There are those that argue that digital contact tracing simply will not work. I discussed these issues in my podcast with Carissa Véliz. Maybe these voices will be heard and privacy will not go into the dying light just yet. But it certainly looks like it might be on life support once more. How many more battles can it win?

(e) The Uncertain Fate of Universalism and Cosmopolitanism
A common theme in books written about moral change is the sense that creeping universalism is the hallmark of moral progress. Humanity started out in small bands and tribes. We owed moral duties to members of our tribes but not to outsiders. They were not ‘one of us’. This made a certain amount of ruthless sense in a world of precarious living conditions and scarce resources. As society grew more technologically complex, and as the social surplus made possible by technology grew, the pressure eased and the moral circle started to expand. More and more people were seen to be ‘one of us’. It hasn’t all been plain sailing, of course, but the recent high watermark in this trend came, perhaps, in the post-WWII era with the rise of global institutions and the recognition of universal human rights.

What’s going to happen in the post COVID-19 world? It seems like we are poised on the precipice and could go in either direction. On the one hand, we will need greater global coordination and cooperation to both resolve this pandemic and prevent the next one. So we could be on the cusp of even greater global cooperation and solidarity. On the other hand, infectious diseases, almost by necessity, tend to breed suspicion of others. Others are a threat since they could be carrying the disease. Borders are being shut down to prevent the spread. We are asked to distance ourselves from one another. The sense that the disease originated in a specific country (China) also fosters suspicion and antipathy toward foreigners.

I am not sure which way we are going to go. I have certainly felt my own world shrinking quite a bit over the past few weeks. It’s hard to maintain a globalist and cosmopolitan outlook when you limit your movements and contacts so much. When I go for a walk I find myself wary of others: are they getting too close? Why aren’t they abiding by social distancing rules? But when I go online and read opinions from around the world I do also sense some greater solidarity emerging, particularly in academic and research communities. The only problem is that they have always tended to be more cosmopolitan and globalist in their outlook.

(f) Return of a Disgust Based Morality
According to Jonathan Haidt’s influential theory of moral foundations, one of the five (or six) basic moral parameters used to shape our social moral consciousness is that of disgust. This gives rise to the perception that some people, foods, places, and actions are ‘unclean’ and ‘impure’. It also gives rise to an associated set of purity and cleanliness norms. These can be odd, but relatively innocuous, when applied to rituals around food and personal hygiene. They can be pernicious and exclusionary when applied to people and, classically, sexual practices. One of Haidt’s claims is that disgust-based morality is more prevalent in traditional and conservative moral communities. Modern, liberal moral communities seem to have abandoned it in favour of a social ethic based primarily on harm and fairness.

It seems plausible to me to suggest that the COVID-19 pandemic will provide an opportunity for a disgust-based morality to get a foothold in modern liberal societies once more. To some extent this could be positive. Better policing of norms around personal hygiene (hand-washing) and social hygiene (mask-wearing) could genuinely reduce the spread of infectious disease, thereby limiting loss of life. At the same time, there could be pernicious effects as some people and practices that are, in fact, innocuous are perceived to be ‘unclean’ or ‘disgusting’ and so must be ‘purged’ from our communities. This could help to support the retrenchment from universalism and cosmopolitanism that I outlined above.

(g) Animal Ethics and the One Health Approach
One thing the COVID-19 pandemic clearly places under the spotlight is our relationship with animals. It’s very clear, if you read the research on viruses and pandemics, that zoonoses like SARS-CoV-2 are hastened by how we choose to control and live with animal populations. Many viruses that are deadly to humans jump from animals (where they are relatively innocuous) to humans (yes, I know we are ‘animals’ too). Living in close proximity to animals, killing them and eating them allows this to happen with relative frequency.

The Wuhan wet market has been pinpointed (though this is disputed) as the origin point for this particular outbreak. Wet markets of this sort are notable for the fact that they contain wild and exotic animals that are slaughtered onsite and sold to humans. But it is not just wet markets that are to blame for the risk of viral pandemics. The entire system of animal agriculture has played its part. We breed animals in closed environments where infectious diseases can spread with ease; we pump them full of anti-microbial drugs that encourage the growth of anti-microbial resistant strains; we destroy the natural homelands of wild animals, forcing them to migrate into closer proximity with us. (This is something discussed in more detail in my podcast with Jeff Sebo.)

Epidemiologists have long noted that this is a recipe for disaster: a ticking time bomb that could explode at any time. The best solution is to adopt a ‘one health’ approach to the world whereby we see our fates as inextricably intertwined with the fate of our animal populations. As the COVID-19 pandemic makes the wisdom of the one health approach more obvious, it also provides an opportunity for an enhanced animal ethics. Maybe we will now realise that we have moral duties to animals and take these duties seriously.

(h) An Ethic of Existential Risk
One final possible moral revolution concerns our attitude to existential risk. I am not going to debate the precise definition of this concept (Toby Ord’s recent book The Precipice offers a highly restrictive definition of the concept). I am just going to submit that an existential risk is one that threatens a lot of harm to human civilisation. A highly lethal global pandemic has long been touted as a potential existential risk.

Right now we are living with a pandemic that is, fortunately, not as lethal as it could have been. Nevertheless, the fact that we had a close call this time around could change our attitude to all those other existential risks that people have been harping on about for some time: bioweapons, nuclear war, global warming, supervolcanoes, artificial superintelligence and so on. Maybe now we will take them much more seriously? In other words, maybe we will emerge from this pandemic with a social moral consciousness that is more attuned to existential risk and more willing to take decisive preventive action.

These are the eight nascent moral revolutions that occurred to me. I am sure that I could identify more if I thought about it for longer. As you will see, there is plenty of uncertainty in my preceding remarks about the exact course these moral revolutions might take, if they come to pass. It should also be clear that I am not claiming that these revolutions will be positive. Some of them might be quite negative. We are working with a descriptive understanding of social morality; not a normative one. Where we shift to in the space of possible social morality could be good or bad, depending on your normative commitments.

3. Conclusion: Preventing the Moral Revolution
I mentioned at the outset that the COVID-19 pandemic has shaken society out of its equilibrium. If you look around you can now glimpse tantalising new possibilities in the landscape of possible moral futures. I want to conclude by briefly mentioning three ways in which these revolutions might never get off the ground; in which we settle back into our old patterns.

First, the pandemic might just be a ‘short sharp shock’. There will be no second or third wave. We won’t be living with it for the next 18-24 months. It will just be that weird spring — you remember the one — where we all stayed at home for 6-12 weeks, drove our families a bit mad, but got through it all okay. Assuming we didn’t work on the frontline, or lose a loved one to the disease, or get infected ourselves, we will just look back on it as a nice holiday from our ordinary lives. Not something worth changing our moral beliefs over.

Second, the pandemic might be a source of collective shame — something we would all much rather forget. This is something I discussed in my podcast with Michael Cholbi. It has been noted by other historians and commentators looking at past pandemics. The 1918 flu pandemic, for example, was largely ignored until recent times, perhaps because it didn’t show humanity in its best light. The same could happen this time around. In the fight for survival we might become more insular, selfish and scared. We might prefer to distance ourselves from the people we were in the midst of the pandemic and return to who we were before.

(There is a bit of a paradox here. Others have pointed it out: If we are very successful in flattening the curve and suppressing the virus we might think we overreacted and that there was nothing truly revolutionary about the pandemic. This might encourage the belief that there is no need to change who we are or what we do. If we are unsuccessful and the virus spreads and kills millions, we might like to forget about it. It’s only if we land somewhere in between these extremes that the revolutionary potential is most potent.)

Third, and finally, we might lack the requisite moral visionaries. As noted above, we need people — individually and collectively — to identify the new moral possibilities and articulate them in a compelling and engaging way. If the moral visionaries do not emerge, we might not realise what needs to change.
